Federated Learning (FL) has emerged as a promising distributed learning paradigm with the added advantage of data privacy. With the growing interest in collaboration among data owners, FL has gained significant attention from organizations. The idea of FL is to enable collaborating participants to train machine learning (ML) models on decentralized data without breaching privacy. In simpler words, federated learning is the approach of ``bringing the model to the data, instead of bringing the data to the model''. When applied to data that is partitioned vertically across participants, federated learning can build a complete ML model by combining local models trained only on the distinct feature sets held at the local sites. This architecture of FL is referred to as vertical federated learning (VFL), which differs from conventional FL on horizontally partitioned data. Because VFL differs from conventional FL, it comes with its own issues and challenges. In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL. Additionally, the literature review highlights existing solutions to challenges in VFL and provides potential research directions in this domain.
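To make the vertical split concrete, the following minimal sketch (our own illustration, not a system surveyed in the paper) trains a logistic model whose feature columns are held by two different parties; only partial scores and a shared gradient signal are exchanged, never the raw features. All names and the simple linear architecture are assumptions for illustration.

```python
# Minimal, hypothetical sketch of vertical federated learning with two parties.
# Each party holds the SAME samples but DIFFERENT feature columns; only partial
# scores (not raw features) are sent to a coordinator.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000
X_a = rng.normal(size=(n_samples, 3))            # party A's private features
X_b = rng.normal(size=(n_samples, 2))            # party B's private features
y = (X_a[:, 0] + X_b[:, 1] > 0).astype(float)    # labels held by the coordinator

w_a = np.zeros(3)     # party A's local model parameters
w_b = np.zeros(2)     # party B's local model parameters
b = 0.0
lr = 0.1

for _ in range(200):
    # Each party computes its partial score locally and shares only that score.
    z = X_a @ w_a + X_b @ w_b + b
    p = 1.0 / (1.0 + np.exp(-z))         # coordinator combines partial scores
    grad_z = (p - y) / n_samples         # gradient of logistic loss w.r.t. z
    # The coordinator returns grad_z; each party updates its own weights locally.
    w_a -= lr * X_a.T @ grad_z
    w_b -= lr * X_b.T @ grad_z
    b -= lr * grad_z.sum()

print("training accuracy:", ((X_a @ w_a + X_b @ w_b + b > 0) == (y > 0.5)).mean())
```

In practice, VFL systems additionally protect the exchanged scores and gradients, e.g., with secure aggregation or homomorphic encryption.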
We study the problem of combining neural networks with symbolic reasoning. Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference, limiting the scalability of PNL solutions. We introduce Approximate Neurosymbolic Inference (A-NeSI): a new framework for PNL that uses neural networks for scalable approximate inference. A-NeSI 1) performs approximate inference in polynomial time without changing the semantics of probabilistic logics; 2) is trained using data generated by the background knowledge; 3) can generate symbolic explanations of predictions; and 4) can guarantee the satisfaction of logical constraints at test time, which is vital in safety-critical applications. Our experiments show that A-NeSI is the first end-to-end method to scale the Multi-digit MNISTAdd benchmark to sums of 15 MNIST digits, up from 4 in competing systems. Finally, our experiments show that A-NeSI achieves explainability and safety without a penalty in performance.
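The following toy sketch illustrates point 2), training an approximate inference network on data generated by the background knowledge, for a two-digit addition task; it is our own rough rendering of the principle under assumed details, not the A-NeSI implementation.

```python
# Toy sketch: train an approximate inference network q(sum | beliefs) on data
# generated by the symbolic background knowledge (here: addition of two digits).
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_task(batch):
    # Random "beliefs" over two digits, standing in for a perception net's output.
    p1 = torch.softmax(torch.randn(batch, 10), dim=-1)
    p2 = torch.softmax(torch.randn(batch, 10), dim=-1)
    # Labels come from the symbolic program applied to sampled digit worlds.
    d1 = torch.multinomial(p1, 1).squeeze(1)
    d2 = torch.multinomial(p2, 1).squeeze(1)
    return torch.cat([p1, p2], dim=-1), d1 + d2     # sums in 0..18

# Polynomial-time surrogate for exact inference over all 10 x 10 digit worlds.
q = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 19))
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    beliefs, sums = sample_task(256)
    loss = loss_fn(q(beliefs), sums)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```

The key property illustrated here is that the symbolic program is only sampled from during training; at prediction time the inference network runs in polynomial time regardless of the number of digits.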
For improving short-length codes, we demonstrate that classic decoders can also be used with real-valued, neural encoders, i.e., deep-learning based codeword sequence generators. Here, the classical decoder can be a valuable tool to gain insights into these neural codes and shed light on weaknesses. Specifically, the turbo-autoencoder is a recently developed channel coding scheme where both encoder and decoder are replaced by neural networks. We first show that the limited receptive field of convolutional neural network (CNN)-based codes enables the application of the BCJR algorithm to optimally decode them with feasible computational complexity. These maximum a posteriori (MAP) component decoders are then used to form classical (iterative) turbo decoders for parallel or serially concatenated CNN encoders, offering close-to-maximum-likelihood (ML) decoding of the learned codes. To the best of our knowledge, this is the first time that a classical decoding algorithm is applied to a non-trivial, real-valued neural code. Furthermore, as the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
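The decisive property is the finite receptive field: with stride-1 convolutions, each coded symbol depends on a bounded window of message bits, so the encoder behaves like a finite-state convolutional code and admits a trellis. A small sketch of that calculation is below; the layer sizes are illustrative assumptions, not the turbo-autoencoder's actual architecture.

```python
# Receptive field of a stack of stride-1 1D convolutions, and the implied
# number of trellis states a BCJR decoder would have to track.

def receptive_field(kernel_sizes, dilations=None):
    """Receptive field of a stack of stride-1 1D convolutions."""
    dilations = dilations or [1] * len(kernel_sizes)
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

m = receptive_field([5, 5, 5])        # e.g., three conv layers with kernel size 5
print("receptive field (bits per coded symbol):", m)     # 13
print("trellis states for BCJR:", 2 ** (m - 1))          # state = previous m-1 bits
```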
We describe PromptBoosting, a query-efficient procedure for building a text classifier from a neural language model (LM) without access to the LM's parameters, gradients, or hidden representations. This form of "black-box" classifier training has become increasingly important as the cost of training and inference in large-scale LMs grows. But existing black-box LM classifier learning approaches are themselves computationally inefficient, typically specializing LMs to the target task by searching in a large space of (discrete or continuous) prompts using zeroth-order optimization methods. Instead of directly optimizing in prompt space, PromptBoosting obtains a small pool of prompts via a gradient-free approach and then constructs a large pool of weak learners by pairing these prompts with different elements of the LM's output distribution. These weak learners are then ensembled using the AdaBoost algorithm. The entire learning process requires only a small number of forward passes and no backward pass. Experiments show that PromptBoosting achieves state-of-the-art performance in multiple black-box few-shot classification tasks, and matches or outperforms full fine-tuning in both few-shot and standard learning paradigms, while training 10x faster than existing black-box methods.
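A hedged sketch of the boosting stage described above: each weak learner pairs one prompt with a simple rule over the LM's output distribution, and the learners are combined with standard AdaBoost. The `query_lm` function is a hypothetical placeholder for a black-box forward pass, and the prompts, thresholds, and data are invented for illustration only.

```python
# Sketch of prompt-based weak learners ensembled with AdaBoost.
import numpy as np

rng = np.random.default_rng(0)

def query_lm(prompt, text):
    # Placeholder: probability the LM assigns to the "positive" verbalizer token.
    return rng.random()

def weak_learner(prompt, threshold):
    # Predicts class 1 when the verbalizer probability exceeds the threshold.
    return lambda text: int(query_lm(prompt, text) > threshold)

def adaboost(texts, labels, learners, rounds=10):
    y = np.asarray(labels)
    w = np.full(len(texts), 1.0 / len(texts))          # example weights
    ensemble = []
    for _ in range(rounds):
        preds = {h: np.array([h(t) for t in texts]) for h in learners}
        err, best = min(((w[preds[h] != y].sum(), h) for h in learners),
                        key=lambda e: e[0])
        err = min(max(err, 1e-10), 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)        # learner weight
        w *= np.exp(alpha * np.where(preds[best] != y, 1.0, -1.0))
        w /= w.sum()                                   # up-weight mistakes
        ensemble.append((alpha, best))
    return ensemble

def predict(ensemble, text):
    # Weighted vote; weak-learner outputs {0, 1} are mapped to {-1, +1}.
    return int(sum(a * (2 * h(text) - 1) for a, h in ensemble) > 0)

prompts = ["It was", "Overall, the text feels"]        # small gradient-free pool
learners = [weak_learner(p, th) for p in prompts for th in (0.3, 0.5, 0.7)]
ensemble = adaboost(["good movie", "terrible film"], [1, 0], learners, rounds=5)
print(predict(ensemble, "a wonderful story"))
```

Note that the entire loop only requires forward passes of the black-box LM, which is the query-efficiency point the abstract makes.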
The preservation, monitoring, and control of water resources have been a major challenge in recent decades. Water resources must be constantly monitored to track contamination levels. To meet this objective, this paper proposes AquaFeL-PSO, a water monitoring system that uses autonomous surface vehicles equipped with water quality sensors and is based on multimodal particle swarm optimization and federated learning, with a Gaussian process as a surrogate model. The proposed monitoring system has two phases: an exploration phase and an exploitation phase. In the exploration phase, the vehicles examine the surface of the water resource, and from the data acquired by the water quality sensors, a first water quality model is estimated at the central server. In the exploitation phase, the area is divided into action zones using the model estimated in the exploration phase, for a better exploitation of the contamination zones. To obtain the final water quality model of the water resource, the models obtained in both phases are combined. The results demonstrate the efficiency of the proposed path planner in obtaining water quality models of the pollution zones, with a 14$\%$ improvement over the other path planners compared, and of the entire water resource, obtaining a 400$\%$ better model, as well as in detecting pollution peaks, where the improvement in this case study is 4,000$\%$. It was also shown that the results obtained by applying the federated learning technique are very similar to those of a centralized system.
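As a rough illustration of the exploitation phase as we read it (not the authors' code), the sketch below fits a Gaussian process surrogate to sparse water-quality measurements and then runs a plain PSO swarm on the surrogate's mean prediction to localize a contamination peak. The synthetic field, kernel, and PSO constants are assumptions.

```python
# GP surrogate fitted on exploration-phase measurements, then PSO on the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def contamination(x):                     # hypothetical ground-truth field
    return np.exp(-np.sum((x - 0.7) ** 2, axis=-1) * 20)

# Exploration phase: sparse measurements along the vehicles' paths.
X_meas = rng.random((40, 2))
y_meas = contamination(X_meas)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X_meas, y_meas)

# Exploitation phase: PSO on the surrogate's mean prediction.
pos = rng.random((20, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), gp.predict(pos)
for _ in range(50):
    gbest = pbest[np.argmax(pbest_val)]
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    val = gp.predict(pos)
    better = val > pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]

print("estimated contamination peak near:", pbest[np.argmax(pbest_val)])
```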
Since structured data is often insufficient, labels need to be extracted from free text in electronic health records when developing models for clinical information retrieval and decision support systems. One of the most important contextual properties in clinical text is negation, which indicates the absence of findings. We aimed to improve the large-scale extraction of labels by comparing three methods for negation detection in Dutch clinical notes. Using the Erasmus Medical Center Dutch Clinical Corpus, we compared a rule-based method based on ContextD, a BiLSTM model using MedCAT, and (fine-tuned) RoBERTa-based models. We found that both the BiLSTM and RoBERTa models consistently outperformed the rule-based model in terms of F1 score, precision, and recall. In addition, we systematically categorized the classification errors of each model, which can be used to further improve model performance for specific applications. Combining the three models was not beneficial in terms of performance. We conclude that the BiLSTM and RoBERTa-based models in particular are highly accurate in detecting clinical negation, but that ultimately all three approaches can be viable depending on the use case at hand.
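For readers unfamiliar with the transformer-based setup, the sketch below fine-tunes a RoBERTa-style sequence classifier to label a bracketed clinical mention as negated or affirmed. The model name, example sentences, and labels are placeholders; the study itself used Dutch clinical models and the Erasmus MC corpus, which are not reproduced here.

```python
# Hedged sketch: binary negation classification of a marked mention in context.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-base"                      # placeholder, not the study's model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["No sign of [pneumonia] on the chest X-ray.",
         "Patient presents with [pneumonia]."]
labels = torch.tensor([1, 0])                    # 1 = negated, 0 = affirmed

batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                               # tiny demo training loop
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())
```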
This note has three purposes: (i) we provide a self-contained exposition of the fact that conjunctive queries are not efficiently learnable in the Probably Approximately Correct (PAC) model, paying explicit attention to the fact that this concept class lacks the polynomial-size fitting property, a property that is tacitly assumed in much of the computational learning theory literature; (ii) we establish a strong negative PAC learnability result that applies to many restricted classes of conjunctive queries (CQs), including acyclic CQs for a wide range of notions of acyclicity; (iii) we show that CQs are efficiently PAC learnable with membership queries.
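For reference, the textbook notion of (efficient) PAC learnability that the note builds on can be stated as follows; this is the standard formulation, and the note's exact variant for conjunctive queries may differ in details such as how query size is measured.

```latex
% Standard definition of (efficient) PAC learnability, for reference.
A concept class $\mathcal{C}$ is PAC learnable if there is an algorithm $A$ and a
polynomial $p$ such that for every target concept $c \in \mathcal{C}$, every
distribution $D$ over instances, and all $\varepsilon, \delta \in (0,1)$: given
$p(1/\varepsilon,\, 1/\delta,\, \mathrm{size}(c))$ labeled examples drawn i.i.d.\
from $D$, $A$ outputs a hypothesis $h$ such that
\[
  \Pr\bigl[\operatorname{err}_D(h) \le \varepsilon\bigr] \ge 1 - \delta,
  \qquad
  \operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \ne c(x)\bigr].
\]
The class is \emph{efficiently} PAC learnable if, in addition, $A$ runs in time
polynomial in the same parameters.
```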
Finding the optimal message quantization is a key requirement for low-complexity belief propagation (BP) decoding. To this end, we propose a floating-point surrogate model that imitates quantization effects as additions of uniform noise, whose amplitudes are trainable variables. We verify that the surrogate model closely matches the behavior of a fixed-point implementation, and we propose a hand-crafted loss function to realize a trade-off between complexity and error-rate performance. A deep learning-based method is then applied to optimize the message bitwidths. Moreover, we show that parameter sharing can both ensure implementation-friendly solutions and lead to faster training convergence than independent parameters. We provide simulation results for 5G low-density parity-check (LDPC) codes and report an error-rate performance within 0.2 dB of floating-point decoding at an average message quantization bitwidth of 3.1 bits. Furthermore, we show that the learned bitwidths also generalize to other code rates and channels.
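A minimal PyTorch sketch of the surrogate idea as we read it from the abstract (not the authors' code): during training, quantization of a message is imitated by adding uniform noise whose amplitude is a trainable step size, keeping everything differentiable; at test time the message is actually quantized with the learned step.

```python
# Differentiable quantization surrogate: uniform noise with a trainable amplitude.
import torch
import torch.nn as nn

class QuantizationSurrogate(nn.Module):
    def __init__(self, init_step=0.5):
        super().__init__()
        # Trainable quantization step (noise amplitude); a full decoder would use
        # one such parameter per (shared group of) message types, a scalar here.
        self.log_step = nn.Parameter(torch.tensor(float(init_step)).log())

    def forward(self, msg):
        step = self.log_step.exp()
        if self.training:
            # Uniform noise in [-step/2, step/2] mimics the rounding error.
            noise = (torch.rand_like(msg) - 0.5) * step
            return msg + noise
        # Inference: real uniform quantization with the learned step size.
        return torch.round(msg / step) * step

quant = QuantizationSurrogate()
msgs = torch.randn(8, requires_grad=True)
out = quant(msgs)          # differentiable w.r.t. both the messages and the step
out.sum().backward()       # gradients flow to msgs and to quant.log_step
```

Because the noise amplitude appears directly in the forward pass, gradients reach the step size without any straight-through estimator.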
This paper introduces the novel CNN-based encoder Twin Embedding Network (TEN), for the jigsaw puzzle problem (JPP), which represents a puzzle piece with respect to its boundary in a latent embedding space. Combining this latent representation with a simple distance measure, we demonstrate improved accuracy levels of our newly proposed pairwise compatibility measure (CM), compared to those of various classical methods, for degraded puzzles with eroded tile boundaries. We focus on this problem instance for our case study, as it serves as an appropriate testbed for real-world scenarios. Specifically, we demonstrate an improvement of up to 8.5% and 16.8% in reconstruction accuracy for the so-called Type-1 and Type-2 problem variants, respectively. Furthermore, we also demonstrate that TEN is faster by a few orders of magnitude, on average, than a typical deep neural network (NN) model, i.e., it is as fast as the classical methods. In this regard, the paper makes a significant first attempt at bridging the gap between the relatively low accuracy (of classical methods) and the intensive computational complexity (of NN models) for practical, real-world puzzle-like problems.
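A hedged sketch of the compatibility measure described above: a shared-weight ("twin") encoder embeds the two facing boundary strips, and compatibility is simply a distance in the latent space. The strip shape and network layers are illustrative assumptions, not the TEN configuration reported in the paper.

```python
# Boundary embeddings with a shared encoder; compatibility = negative L2 distance.
import torch
import torch.nn as nn

class BoundaryEncoder(nn.Module):
    def __init__(self, emb_dim=64):
        super().__init__()
        # Encodes a boundary strip of shape (3, 8, 64): RGB x strip width x edge length.
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, strip):
        return self.net(strip)

encoder = BoundaryEncoder()               # shared ("twin") weights for both edges

def compatibility(right_strip_i, left_strip_j):
    """Higher is better: negative L2 distance between boundary embeddings."""
    e_i = encoder(right_strip_i)
    e_j = encoder(left_strip_j)
    return -torch.norm(e_i - e_j, dim=-1)

# Example: score 5 candidate neighbours for one piece's right edge.
right_edge = torch.randn(1, 3, 8, 64).expand(5, -1, -1, -1)
candidates = torch.randn(5, 3, 8, 64)
print(compatibility(right_edge, candidates))
```

Once the encoder is trained, each piece's boundary embedding can be precomputed, so scoring all piece pairs reduces to cheap distance computations rather than repeated deep forward passes, which is the speed argument made above.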
Street-level imagery holds significant potential for scaling up in-situ data collection. This is enabled by combining the use of cheap, high-quality cameras with recent advances in deep learning compute solutions to derive relevant thematic information. We present a framework for collecting and extracting crop type and phenological information from street-level imagery using computer vision. During the 2018 growing season, high-definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands. Each month from March to October, a fixed 200-km route was surveyed, collecting one picture per second, resulting in a total of 400,000 geotagged pictures. At 220 specific parcel locations, in-field crop observations were recorded for 17 crop types. In addition, the time span covered specific pre-emergence parcel stages, such as differently cultivated bare soil for spring and summer crops, as well as post-harvest cultivation practices, e.g., green manuring and catch crops. Classification was done using TensorFlow with a well-known image recognition model, based on transfer learning with convolutional neural networks (MobileNet). A hypertuning methodology was developed to obtain the best-performing model among 160 models. This best model was applied to an independent inference set, discriminating crop types with a Macro F1 score of 88.1%, and of 86.9% at the parcel level. The potential and caveats of the approach, along with practical considerations for implementation and improvement, are discussed. The proposed framework speeds up high-quality in-situ data collection and suggests avenues for massive data collection via automated classification using computer vision.
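The transfer-learning setup described above maps to a short Keras recipe: a frozen MobileNet backbone with a new classification head for the 17 crop types. The image size, head layers, and training settings below are placeholder assumptions rather than the study's exact hypertuned configuration.

```python
# Transfer learning with a frozen MobileNet backbone and a new crop-type head.
import tensorflow as tf

NUM_CROP_TYPES = 17

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                       # keep the ImageNet features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CROP_TYPES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects built from the geotagged
# street-level pictures and their parcel-level crop labels.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```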